Motion Switching with Sensory and Instruction Signals by designing Dynamical Systems using Deep Neural Network
To ensure that a robot can accomplish an extensive range of tasks, it must
flexibly combine multiple behaviors; designing a dedicated task motion for
every situation becomes increasingly difficult as the number of situations
and types of tasks grows. To handle the switching and combination of multiple
behaviors, we propose a method to design dynamical systems based on point
attractors that accept (i) "instruction signals" for instruction-driven
switching. We incorporate the (ii) "instruction phase" to form a point
attractor and divide the target task into multiple subtasks. By forming an
instruction phase that consists of point attractors, the model embeds a subtask
in the form of trajectory dynamics that can be manipulated using sensory and
instruction signals. Our model comprises two deep neural networks: a
convolutional autoencoder and a multiple time-scale recurrent neural network.
In this study, we apply the proposed method to manipulate soft materials. To
evaluate our model, we design a cloth-folding task that consists of four
subtasks and three patterns of instruction signals, which indicate the
direction of motion. The results show that the robot can perform the required
task by combining subtasks based on sensory and instruction signals, and that
our model captured the relations among these signals through its internal
dynamics.
Comment: 8 pages, 6 figures, accepted for publication in RA-L. An accompanying
video is available at https://youtu.be/a73KFtOOB5
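The instruction-driven switching between point attractors can be illustrated with a minimal dynamical-systems sketch. The attractor targets, gain, and Euler integration below are illustrative assumptions, not the paper's MTRNN-based model:

```python
import numpy as np

def point_attractor_step(x, target, gain=2.0, dt=0.05):
    """One Euler step of a point-attractor system x' = -gain * (x - target)."""
    return x + dt * (-gain * (x - target))

# Hypothetical attractor targets, one per subtask (names are made up).
targets = {"fold_left": np.array([1.0, 0.0]),
           "fold_right": np.array([-1.0, 0.0])}

x = np.zeros(2)
# An instruction signal selects the attractor; the state converges toward it.
for _ in range(200):
    x = point_attractor_step(x, targets["fold_left"])

# Switching the instruction signal redirects the trajectory to a new attractor,
# mimicking instruction-driven switching between embedded subtasks.
for _ in range(200):
    x = point_attractor_step(x, targets["fold_right"])
```

Because each subtask ends at a point attractor, the system can wait there stably until the next instruction signal arrives.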
Stable deep reinforcement learning method by predicting uncertainty in rewards as a subtask
In recent years, a variety of tasks have been accomplished by deep
reinforcement learning (DRL). However, when applying DRL to tasks in a
real-world environment, designing an appropriate reward is difficult. Rewards
obtained via actual hardware sensors may include noise, misinterpretation, or
failed observations. The learning instability caused by these unstable signals
is a problem that remains to be solved in DRL. In this work, we propose an
approach that extends existing DRL models by adding a subtask to directly
estimate the variance contained in the reward signal. The model then takes the
feature map learned by the subtask in a critic network and sends it to the
actor network. This enables stable learning that is robust to the effects of
potential noise. The results of experiments in the Atari game domain with
unstable reward signals show that our method stabilizes training convergence.
We also discuss the extensibility of the model by visualizing feature maps.
This approach has the potential to make DRL more practical for use in noisy,
real-world scenarios.
Comment: Published as a conference paper at ICONIP 202
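The variance-estimation subtask can be sketched as a second head on the critic that predicts the log-variance of the reward and trains with a heteroscedastic Gaussian negative log-likelihood, so noisy rewards contribute less to the value loss. The shapes, weights, and loss form below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy critic: illustrative weights, not the paper's architecture.
W_feat = rng.normal(size=(4, 8)) * 0.1    # shared encoder
w_value = rng.normal(size=8) * 0.1        # value head
w_logvar = np.zeros(8)                    # log-variance head (the subtask)

def critic(state):
    feat = np.tanh(state @ W_feat)        # feature map, shared with the actor
    return feat, feat @ w_value, feat @ w_logvar

def heteroscedastic_nll(value, log_var, reward):
    # Gaussian NLL: the exp(-log_var) factor down-weights the squared error
    # for rewards the subtask judges to be high-variance (noisy).
    return 0.5 * (np.exp(-log_var) * (reward - value) ** 2 + log_var)

state = rng.normal(size=4)
feat, value, log_var = critic(state)
loss = heteroscedastic_nll(value, log_var, reward=1.0)
```

In the paper's design the feature map learned by this subtask is also passed to the actor network, which is what stabilizes policy learning under noisy rewards.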
Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model
The application of large language models (LLMs) to robot action planning
has been actively studied. Instructions given to the LLM in natural
language may be ambiguous or lack information depending on the task
context. The LLM's output can be adjusted by making the instruction
input more detailed; however, the design cost is high. In this paper, we
propose an interactive robot action planning method that allows the LLM to
analyze and gather missing information by asking questions to humans. The
method minimizes the design cost of generating precise robot instructions.
We demonstrated the effectiveness of our method through concrete examples in
cooking tasks. However, our experiments also revealed challenges in robot
action planning with LLMs, such as asking unimportant questions and assuming
crucial information without asking. Shedding light on these issues provides
valuable insights for future research on utilizing LLMs for robotics.
Comment: 7 pages, 6 figures, accepted at SII 202
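The ask-when-uncertain loop can be sketched as follows. `query_llm` here is a stand-in stub that flags missing task slots; a real system would call an LLM API, and the slot names are hypothetical, not from the paper:

```python
# Minimal sketch of interactive planning with active questioning.
REQUIRED_SLOTS = ("object", "destination")  # illustrative task slots

def query_llm(instruction, known):
    """Stub for the LLM: report which required slots remain ambiguous."""
    missing = [s for s in REQUIRED_SLOTS if s not in known]
    if missing:
        return {"status": "uncertain", "question": f"What is the {missing[0]}?"}
    return {"status": "plan", "actions": [f"pick {known['object']}",
                                          f"place at {known['destination']}"]}

def interactive_plan(instruction, answer_fn, max_questions=3):
    """Ask the human until the instruction is unambiguous, then plan."""
    known = {}
    for _ in range(max_questions):
        reply = query_llm(instruction, known)
        if reply["status"] == "plan":
            return reply["actions"]
        # Resolve the ambiguity by asking the human, then retry.
        slot = reply["question"].split()[-1].rstrip("?")
        known[slot] = answer_fn(reply["question"])
    return None
```

The question budget (`max_questions`) is one way to limit the unimportant questions the paper reports as a failure mode.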
Compensation for undefined behaviors during robot task execution by switching controllers depending on embedded dynamics in RNN
Robotic applications require both correct task performance and compensation
for undefined behaviors. Although deep learning is a promising approach to
perform complex tasks, the response to undefined behaviors that are not
reflected in the training dataset remains challenging. In a human-robot
collaborative task, the robot may adopt an unexpected posture due to collisions
and other unexpected events. Therefore, robots should be able to recover from
disturbances to complete the intended task. We propose a
compensation method for undefined behaviors by switching between two
controllers. Specifically, the proposed method switches between learning-based
and model-based controllers depending on the internal representation of a
recurrent neural network that learns task dynamics. We applied the proposed
method to a pick-and-place task and evaluated the compensation for undefined
behaviors. Experimental results from simulations and on a real robot
demonstrate the effectiveness and high performance of the proposed method.
Comment: To appear in IEEE Robotics and Automation Letters (RA-L) and IEEE
International Conference on Robotics and Automation (ICRA 2021
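The switching idea can be sketched as monitoring how well the learned dynamics predict the current state: when the prediction error is large, the state is likely undefined (outside the training data), so control falls back to a model-based recovery controller. The threshold and both controllers below are illustrative stand-ins:

```python
# Sketch: switch controllers based on a (stand-in) RNN prediction error.
ERROR_THRESHOLD = 0.5   # illustrative out-of-distribution threshold

def learned_controller(state):
    return -0.1 * state          # placeholder for the learned policy

def recovery_controller(state, home=0.0):
    return -(state - home)       # simple model-based return-to-home motion

def select_action(state, rnn_prediction):
    """Use the learned controller while the RNN tracks the state well;
    otherwise recover with the model-based controller."""
    prediction_error = abs(state - rnn_prediction)
    if prediction_error > ERROR_THRESHOLD:
        return recovery_controller(state), "model_based"
    return learned_controller(state), "learning_based"
```

Once the recovery controller brings the robot back to a familiar state, the prediction error drops and the learned controller resumes the task.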
Online Self-Supervised Learning for Object Picking: Detecting Optimum Grasping Position using a Metric Learning Approach
Self-supervised learning methods are attractive candidates for automatic
object picking. However, the trial samples lack the complete ground truth
because the observable parts of the agent are limited. That is, the information
contained in the trial samples is often insufficient to learn the specific
grasping position of each object. Consequently, the training falls into a local
solution, and the grasp positions learned by the robot are independent of the
state of the object. In this study, the optimal grasping position of an
individual object is determined from the grasping score, defined as the
distance in the feature space obtained using metric learning. The closeness of
the solution to the pre-designed optimal grasping position was evaluated in
trials. The proposed method incorporates two types of feedback control: one
feedback enlarges the grasping score when the grasping position approaches the
optimum; the other reduces the negative feedback of the potential grasping
positions among the grasping candidates. The proposed online self-supervised
learning method employs two deep neural networks: an SSD that detects the
grasping position of an object, and Siamese networks (SNs) that evaluate the
trial sample using the similarity of two inputs in the feature space. Our
method embeds the relation of each grasping position as feature vectors by
training the trial samples and a few pre-samples indicating the optimum
grasping position. By incorporating the grasping score based on the feature
space of SNs into the SSD training process, the method preferentially trains
the optimum grasping position. In the experiment, the proposed method achieved
a higher success rate than the baseline method using simple teaching signals,
and the grasping scores in the feature space of the SNs accurately represented
the grasping positions of the objects.
Comment: 8 page
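A grasping score defined as distance in a metric-learned feature space can be sketched as follows. The embedding is a stub for one Siamese branch, and all patches and weights are made-up illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3)) * 0.5   # stand-in embedding weights

def embed(patch):
    """Stub for a Siamese branch: maps an image-patch vector to features."""
    return np.tanh(patch @ W)

def grasping_score(candidate_patch, optimal_patches):
    """Higher score = candidate embeds closer to a pre-designed optimum."""
    d = min(np.linalg.norm(embed(candidate_patch) - embed(p))
            for p in optimal_patches)
    return -d

# Rank hypothetical candidate patches against one optimal pre-sample.
optimal = [np.ones(6)]
candidates = [np.ones(6) * 0.9, np.zeros(6)]
best = max(candidates, key=lambda c: grasping_score(c, optimal))
```

Feeding such a score back into detector training is what lets the method prefer the optimum grasping position over other feasible candidates.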
Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot
To support humans in their daily lives, robots are required to autonomously
learn, adapt to objects and environments, and perform the appropriate actions.
We tackled the task of cooking scrambled eggs using real ingredients, in
which the robot needs to perceive the states of the egg and adjust stirring
movement in real time, while the egg is heated and the state changes
continuously. Previous works found handling such changing objects challenging:
sensory information is dynamic and mixes important and noisy components, and
the modality that should be attended to changes from moment to moment, making
it difficult to realize both perception and motion generation in real time. We
propose a predictive recurrent neural network with an attention mechanism that
weighs the sensor inputs by how important and reliable each modality is,
realizing quick and efficient perception and motion generation. The model is
trained via learning from demonstration,
and allows the robot to acquire human-like skills. We validated the proposed
technique on the robot Dry-AIREC; with our learning model, it could cook
scrambled eggs even with unknown ingredients. The robot changed its stirring
method and direction depending on the state of the egg: at the beginning it
stirred across the whole pot, and after the egg started to cook it switched to
flipping and splitting motions targeting specific areas, although we did not
explicitly indicate them.
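Weighing sensor modalities can be sketched as a softmax attention over per-modality feature vectors. The modality names, vectors, and logits below are made-up illustrations, not the paper's learned attention:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(modality_features, attention_logits):
    """Fuse sensor modalities by softmax attention weights, so the
    currently reliable modality dominates the fused feature."""
    w = softmax(np.asarray(attention_logits, dtype=float))
    return sum(wi * f for wi, f in zip(w, modality_features))

# Hypothetical modalities: vision, force, joint angles (made-up vectors).
vision, force, joints = np.ones(4), np.full(4, 2.0), np.zeros(4)
fused = attend([vision, force, joints], [2.0, 0.5, -1.0])
```

In a predictive RNN, logits like these would be produced by the network at every time step, letting the focus shift as the egg's state changes.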
Machine Learning Approach for Frozen Tuna Freshness Inspection Using Low-Frequency A-Mode Ultrasound
Despite the ubiquity of ultrasonography in nondestructive inspection, its application to high-attenuation materials is challenging. At frequencies below 1 MHz, ultrasound can inspect high-attenuation materials owing to its high penetration ability. Such ultrasound data are acquired using a single-element transducer that generates single-channel signals (A-mode). However, low-frequency A-mode ultrasound signals have low resolution caused by long wavelengths, and carry less information than B-mode images generated by multi-channel transducers. Discriminating low-resolution data is made possible by recent advances in machine learning. This study employs machine learning to develop an inspection method for high-attenuation frozen materials, focusing on the freshness of frozen tuna, which has a large market but currently relies on a destructive inspection method. We applied eight typical machine learning algorithms to A-mode signal data (43 samples, 3168 signals) of frozen tuna to calculate freshness scores, using the fast Fourier transform in the feature extraction process. Our experiments show that all algorithms could classify the freshness of frozen tuna with statistical significance (p < 0.05, one-tailed t-test). Furthermore, we investigated the performance improvement in the mean (standard deviation) of the area under the receiver operating characteristic curve obtained by averaging the freshness scores over 24 signals. The best performance (quadratic discriminant analysis) increased from 0.619 (0.041) using a single signal to 0.724 (0.080) using 24 signals with statistical significance (p < 0.05, paired one-tailed t-test). This is the first study to inspect frozen tuna using ultrasound and machine learning technology.
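The FFT-based feature extraction and the 24-signal score averaging can be sketched as follows. The bin count and the scoring function are illustrative assumptions; the paper does not specify this exact pipeline:

```python
import numpy as np

def fft_features(signal, n_bins=16):
    """Extract magnitude-spectrum features from one A-mode signal, pooled
    into coarse frequency bins (bin count is illustrative)."""
    mag = np.abs(np.fft.rfft(signal))
    bins = np.array_split(mag, n_bins)
    return np.array([b.mean() for b in bins])

def mean_freshness_score(signals, score_fn):
    """Average per-signal scores over several signals from one sample,
    mirroring the 24-signal averaging that improved AUC in the paper."""
    return float(np.mean([score_fn(fft_features(s)) for s in signals]))

# A synthetic low-frequency signal concentrates energy in the first bin.
sig = np.sin(2 * np.pi * 5 * np.arange(256) / 256)
feats = fft_features(sig)
```

In the actual study, `score_fn` would be one of the eight trained classifiers (e.g., quadratic discriminant analysis) applied to each signal's features.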
Despite the ubiquity of ultrasonography in nondestructive inspection, its application to high-attenuation materials is challenging. At frequencies less than 1 MHz, ultrasound can inspect high-attenuation materials owing to its high penetration ability. Such ultrasound data are acquired using a single-element transducer that generates single-channel signals (A-mode). However, low-frequency A-mode ultrasound signals have low-resolution caused by long wavelengths, and less information than B-mode images generated by multi-channel transducers. Discriminating low-resolution data is made possible by recent advances in machine learning technology. This study employs machine learning to develop an inspection method for high-attenuation frozen materials. This study focuses on the inspection of the freshness of frozen tuna, which has a large market but uses a destructive inspection method. We applied eight typical machine learning algorithms to A-mode signal data (43 samples, 3168 signals) of frozen tuna to calculate freshness scores; we used fast Fourier transform in the feature extraction process. Our experiments show that all algorithms could classify the freshness of frozen tuna with statistical significance ( < 0.05, one-tailed -test). Furthermore, we investigated the performance improvement in the mean (standard deviation) of the area under the receiver operating characteristic curves by taking the mean of the freshness scores on 24 signals. We observed that the best performance (quadratic discriminant analysis) increased from 0.619 (0.041) using a single signal to 0.724 (0.080) using 24 signals with statistical significance ( < 0.05, paired one-tailed -test). This is the first study that inspects frozen tuna using ultrasound and machine learning technology